A New Framework for Knowledge Revision of Abductive Agents Through Their Interaction

Authors

  • Andrea Bracciali
  • Paolo Torroni
Abstract

In this paper we discuss the design of a knowledge revision framework for abductive reasoning agents, based on interaction. This involves issues such as: how to exploit knowledge multiplicity to find solutions to problems that agents may not individually solve, what information must be passed or requested, how agents can take advantage of the answers that they obtain, and how they can revise their reasoning process as a consequence of interacting with each other. We describe a novel negotiation framework in which agents are able to exchange not only abductive hypotheses but also meta-knowledge, which in this paper is understood as agents' integrity constraints. We formalise some aspects of such a framework by introducing an algebra of integrity constraints, aimed at formally supporting the updating/revising process of the agent knowledge.

J. Dix and J. Leite (Eds.): CLIMA IV, LNAI 3259, pp. 159–177, 2004. © Springer-Verlag Berlin Heidelberg 2004

1 Multiple-Source Knowledge and Coordinated Reasoning

The agent metaphor has recently become a very popular way to model distributed systems, in many application domains that require a goal-directed behaviour of autonomous entities. Thanks also to the recent explosion of the Internet and communication networks, the increased accessibility of knowledge located in different sources at a relatively low cost is opening up interesting scenarios where communication and knowledge sharing can be a constant support to the reasoning activity of agents. In knowledge-intensive applications, the agent paradigm will be able to enhance traditional stand-alone expert systems interacting with end-users, by allowing for inter-agent communication and autonomous revision of knowledge. Some agent-based solutions can already be found in areas such as information and knowledge integration (see the Sage and Find Future projects by Fujitsu), Business Process Management (Agentis Software), the Oracle Intelligent Agents, not to mention decentralised control and scheduling, and e-procurement (Rockwell Automation, Whitestein Technologies and formerly Living Systems AG, Lost Wax, iSOCO), just to cite some.¹

In order to make such solutions reliable, easy to control, to specify and verify, and in order to make their behaviour easy to understand, sound and formal foundations are needed. This reason has recently motivated several Logic Programming based approaches to Multi-Agent Systems. Work done by Kowalski and Sadri [1] on the agent cycle, by Leite et al. [2] on combining several Non-Monotonic Reasoning mechanisms in agents, by Satoh et al. [3] on speculative computation, by Dell'Acqua et al. [4, 5] on agent communication and updates, by Sadri et al. [6] on agent dialogues, and by Ciampolini et al. [7] and Gavanelli et al. [8] on the coordination of reasoning of abductive logic agents, are only some examples of the application of Logic Programming techniques to Multi-Agent Systems. A common characteristic among them is that the agent paradigm brings about the need for dealing with knowledge incompleteness (due to the multiplicity and autonomy of agents) and evolution (due to their interactions). In this research effort, many proposals have been put forward that consider negotiation and dialogue a suitable way to let agents exchange information and solve problems in a collaborative way, and that consider abduction a privileged form of reasoning under incomplete information. However, such information exchange is often limited to simple facts that help agents revise their beliefs. In [7], for instance, such facts are modelled as hypotheses made to explain some observation in a coordinated abductive reasoning activity; in [3] the information exchanged takes the form of answers to questions aimed at confirming/disconfirming assumptions; in [6], of communication acts in a negotiation setting aimed at sharing resources.
In [4] and previous work, the authors present a combination of abduction and updates in a multi-agent setting, where agents are able to propose updates to each other's theories following different patterns. In this scenario of collaboration among abductive agents, knowledge exchange, update and revision play a key role. We argue that agents would benefit from the capability of exchanging information in its various forms, such as predicates, theories and integrity constraints, and that they should be able to do so as a result of a negotiation process regarding knowledge itself. Finally, the relevance of abduction in proposing revisions during theory refinements and updates is widely recognised. In [9], for instance, it is shown how to combine abduction with inductive techniques in order to "adapt" a knowledge base to a set of empirical data and examples. Abductive explanations are used to generalise or specialise the rules of the knowledge base according to the positive and negative information provided. In agreement with the basis of this approach, we discuss knowledge sharing mechanisms for agents based on an abductive framework, where information is provided by the interaction between agents, which exchange not only facts, but also constraints about their knowledge, namely integrity constraints of an abductive theory. Agent knowledge bases can hence be generalised or specialised by relaxing or tightening their integrity constraints. In this paper we focus on information exchange about integrity constraints, abstracting away from issues such as ontology and communication languages and protocols: we assume that all agents share a common ontology and communicate using the same language.

¹ A summary of industrial applications of agent technology, including references to the above mentioned projects and applications, can be found at: http://lia.deis.unibo.it/~pt/misc/AIIA03-review.pdf
In this scenario, autonomous agents will actively ask for knowledge, e.g. facts, hypotheses or integrity constraints, and will autonomously decide whether and how to modify their own constraints whenever needed. For instance, an agent which is unable to explain some observation given its current knowledge will try to collect information from other agents, and possibly decide to relax its own constraints in a way that allows him to explain the observation. Conversely, an agent may find out that some assumptions that he made ought to be inconsistent (for instance, due to "social constraints" [10]), and try to gather information about how to tighten his own constraints or add new ones which prevent him from making such assumptions. The distinguishing features of the distributed reasoning revision paradigm that we envisage consist of a mix of introspection capabilities and communication capabilities. In the style of abductive reasoning, agents are able to provide conditional explanations of the facts that they prove, to communicate such explanations in order to let others validate them, and to single out and communicate the constraints that prevent or allow them to explain an observation. The following example informally illustrates such a scenario.

Example 1. Let us consider an interaction between two agents, A and B, having different expertise about a given topic.

(1) A ⊭_IC ¬f, b
Agent A is unable to prove (find an explanation for) the goal (observation) "there is a bird that does not fly", and he is able to determine a set IC′′ of integrity constraints which prevent the observation from being explained . . .

(2) A → B : ¬f, b
. . . hence A asks B for a possible explanation . . .

(3) B |=_IC^∆ ¬f, b
. . . B is able to explain the observation ¬f, b (e.g., by assuming a set of hypotheses ∆ including p: a penguin is a bird that does not fly), and also to determine a (significant) set IC′ of integrity constraints involved in the abductive proof;

(4) B → A : IC′
B suggests IC′ to A;

(5) A |=_(IC′′⊕IC′)^∆′ ¬f, b
A revises his own constraints according to the information provided by B, by means of some updating operation, and explains his observation, possibly with a different explanation ∆′.

At step (5), the updating operation has been accepted by A, according to a conservative policy, since it relaxes the original constraints of A, in particular those preventing the observation from being explained (IC′′), with a more generic constraint provided by B, e.g. {b, ¬f, ¬p → false} (a bird that does not fly must be a penguin). This example will be further elaborated later on, once the necessary notation is introduced. The various steps involve, to some extent, deduction, introspection, interaction, and revision, and will be further discussed to illustrate the global picture. Then we will focus on the integrity constraint revision process, as part of the overall framework.

The rest of this paper is organised as follows: Section 2 recalls Abductive Logic Programming; Section 3 discusses the above mentioned general steps; Section 4 presents the formal model we devised to address constraint-based knowledge revision: an algebra of constraints, relevant constraint selection operators, and constraint updating/revising operators. Possible applications of the framework in knowledge negotiation scenarios are illustrated in Section 5. Concluding remarks and future work are summarised in Section 6.

2 Background on Abductive Logic Programming

An Abductive Logic Program (ALP) is a triple 〈T, A, IC〉, where T is a theory, namely a Logic Program, A is the set of "abducible" predicates, and IC is a set of integrity constraints.
According to [11], negation as default can be recovered into abduction by replacing negated literals of the form ¬a with a new positive, abducible atom not a, and by adding the integrity constraint ← a, not a to IC. In line with [12], along with the abducible (positive) predicates, A will also contain all such negated literals, which will generally be left implicit. Given an ALP and a goal G, or "observation", abduction is the process of determining a set ∆ of abducibles (∆ ⊆ A) such that: T ∪ ∆ |= G, and T ∪ IC ∪ ∆ ⊭ ⊥. For the sake of simplicity, we assume that T is ground and stratified. In this case, many of the semantics usually adopted for abduction, such as stable model semantics or well-founded semantics, are known to coincide, and hence the symbol |= can be interpreted as denoting entailment according to any of them. If such a set ∆ exists, we call it an abductive explanation for G in 〈T, A, IC〉.
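The two entailment conditions can be sketched operationally for the simple case of a ground definite program (a Python illustration of the definition, not the paper's proof procedure; the helper names and the bird example encoding are our own): compute the least model of T ∪ ∆ by forward chaining, then check that the goal holds and that no constraint body is fully satisfied.

```python
# Illustrative check of "T u delta |= G and T u IC u delta does not
# entail false". T is a list of ground rules (head, body), IC a list
# of constraint bodies "body -> false", delta a set of abducible atoms.
# Default negation is encoded by positive atoms such as "not_f".

def least_model(rules, facts):
    """Forward-chain to the least model of rules + facts."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def is_explanation(T, IC, delta, goal):
    """delta explains goal iff the goal holds and no constraint fires."""
    m = least_model(T, delta)
    return all(g in m for g in goal) and not any(
        all(b in m for b in ic) for ic in IC)

# Bird example: a penguin is a non-flying bird; the constraint forbids
# a non-flying bird that is not a penguin.
T = [("b", ["p"]), ("not_f", ["p"])]
IC = [["b", "not_f", "not_p"]]
assert is_explanation(T, IC, {"p"}, ["b", "not_f"])              # delta = {p}
assert not is_explanation(T, IC, {"b", "not_f", "not_p"}, ["b", "not_f"])
```

The first call succeeds because ∆ = {p} derives both b and not_f without triggering the constraint; the second fails because assuming not_p makes the constraint body true.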



Publication date: 2004